4 research outputs found

    64-bit architectures and compute clusters for high performance simulations

    Simulation of large complex systems remains one of the most demanding applications for high performance computer systems, both in terms of raw compute performance and efficient memory management. The recent availability of 64-bit architectures has opened up the possibility of commodity computers addressing more than the 4 gigabyte memory limit previously enforced by 32-bit addressing. We report on some performance measurements we have made on two 64-bit architectures and their consequences for some high performance simulations. We discuss the performance of our codes for simulations of artificial life models; computational physics models of point particles on lattices; and models with interacting clusters of particles. We have summarised the pertinent features of these codes into benchmark kernels, which we discuss in the context of well-known benchmark kernels of the 32-bit era. We report on how these findings were useful in designing 64-bit compute clusters for high-performance simulations.
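    The 4 gigabyte figure follows directly from pointer width: a 32-bit pointer can name at most 2^32 distinct byte addresses. A minimal sketch of the arithmetic in Python (the function name is ours, purely illustrative, not from the paper):

        # Address-space ceiling implied by pointer width.
        def max_addressable_bytes(pointer_bits):
            """Number of distinct byte addresses a pointer of the given width can name."""
            return 2 ** pointer_bits

        print(max_addressable_bytes(32) / 2**30)  # 4.0 GiB: the 32-bit limit the abstract cites
        print(max_addressable_bytes(64) / 2**60)  # 16.0 EiB: the theoretical 64-bit ceiling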

    Simulation modelling and visualisation: toolkits for building artificial worlds

    Simulation users at all levels make heavy use of compute resources to drive computational simulations across greatly varying application areas of research, using different simulation paradigms. Simulations are implemented in many software forms, ranging from highly standardised and general models that run in proprietary software packages to ad hoc hand-crafted simulation codes for very specific applications. Visualisation of the workings or results of a simulation is another highly valuable capability for simulation developers and practitioners. There are many different software libraries and methods available for creating a visualisation layer for simulations, and it is often a difficult and time-consuming process to assemble a toolkit of these libraries and other resources that best suits a particular simulation model. We present a breakdown of the main simulation paradigms and discuss the differing toolkits and approaches that researchers have taken to tackle coupled simulation and visualisation in each paradigm.
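    The coupling problem the abstract describes often reduces to deciding where in the time-stepping loop the visualisation layer is invoked. A minimal sketch of one common pattern, a render callback fired at a fixed cadence; all names here are illustrative and not from the paper:

        # Illustrative coupling of a time-stepping simulation to a visualisation hook.
        import random

        def run_simulation(state, step, render, n_steps, render_every=10):
            """Advance `state` with `step`; hand it to `render` at a fixed cadence
            so visualisation cost does not dominate the simulation itself."""
            for t in range(n_steps):
                state = step(state)
                if t % render_every == 0:
                    render(state, t)
            return state

        # Usage: a trivial 1-D random walk with textual 'visualisation'.
        final = run_simulation(
            state=0,
            step=lambda x: x + random.choice((-1, 1)),
            render=lambda x, t: print(f"step {t}: position {x}"),
            n_steps=50,
        )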

    Optimising resource utilisation at a university - an allocation problem

    Student enrolment at the University of Natal has been increasing steadily over the years. Moreover, new courses are introduced from time to time. Despite this, State subsidies are declining in real terms. These factors imply escalating demands on physical resources. Historically, at this university, lecture rooms have been used only during the mornings and laboratories only during the afternoons. An obvious way to meet the demand for accommodation is to double the number of timetabled periods so that the lecture rooms are in use the whole day. Since many classes are in fact too large to be accommodated in any one room, it is also necessary to split these classes into separate lecture groups. Likewise, classes have to be divided into several smaller groups for laboratory and tutorial sessions. The policy at this university is to encourage students to choose curricula including courses selected from as wide a range as possible, and the above timetable strategy apparently facilitates this. In practice, however, ensuring that student numbers are evenly distributed across the alternative sessions for a given course, and doing this for all courses simultaneously while avoiding clashes, is not a simple matter.
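    The difficulty the abstract points to can be phrased as a constrained assignment problem: place each student into one session of each of their courses so that session sizes stay balanced and no student is booked into two sessions in the same timeslot. A minimal greedy sketch under those assumptions (the data layout and all names are ours, purely illustrative, not the paper's method):

        # Greedy, clash-aware assignment of students to course sessions.
        # sessions: course -> list of (session_id, timeslot); illustrative layout.
        def assign(students, sessions):
            load = {}          # session_id -> number of students placed so far
            placement = {}     # (student, course) -> session_id
            for student, courses in students.items():
                busy = set()   # timeslots this student already occupies
                for course in courses:
                    # Prefer the least-loaded session whose timeslot is still free.
                    options = [(sid, slot) for sid, slot in sessions[course]
                               if slot not in busy]
                    if not options:
                        raise ValueError(f"clash: no feasible {course} session for {student}")
                    sid, slot = min(options, key=lambda o: load.get(o[0], 0))
                    placement[(student, course)] = sid
                    load[sid] = load.get(sid, 0) + 1
                    busy.add(slot)
            return placement

        students = {"s1": ["maths", "physics"], "s2": ["maths", "physics"]}
        sessions = {"maths":   [("m-am", "am"), ("m-pm", "pm")],
                    "physics": [("p-am", "am"), ("p-pm", "pm")]}
        print(assign(students, sessions))  # balanced, clash-free placements

    A greedy pass like this illustrates the local decisions involved, but it can fail where a global method would succeed, which is precisely why the abstract calls the simultaneous problem non-trivial.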

    Evaluating parallel optimization on transputers

    The increasing processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, indicating under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with that of the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
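    For orientation, the Davidon-Fletcher-Powell method maintains an approximation H to the inverse Hessian and updates it each iteration from the step s = x_{k+1} - x_k and the gradient change y = g_{k+1} - g_k; the dense outer products and matrix-vector product in this update are the kind of component that lends itself to parallel evaluation. A minimal NumPy sketch of the standard DFP update (not the paper's transputer implementation):

        # Standard DFP inverse-Hessian update: the outer products and the
        # matrix-vector product H @ y are natural candidates for parallelism.
        import numpy as np

        def dfp_update(H, s, y):
            """Return the DFP update of inverse-Hessian approximation H,
            given step s and gradient difference y."""
            Hy = H @ y
            return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

        # Usage on a 2-D quadratic f(x) = 0.5 * x^T A x, whose gradient is A x.
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        x0, x1 = np.array([1.0, 1.0]), np.array([0.5, 0.2])
        s, y = x1 - x0, A @ x1 - A @ x0
        print(dfp_update(np.eye(2), s, y))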